Adversarial meta-learning of Gamma-minimax estimators that leverage prior knowledge

Authors

Abstract

Bayes estimators are well known to provide a means to incorporate prior knowledge that can be expressed in terms of a single prior distribution. However, when this knowledge is too vague to express with a single prior, an alternative approach is needed. Gamma-minimax estimators provide such an approach. These estimators minimize the worst-case Bayes risk over a set Γ of prior distributions that are compatible with the available knowledge. Traditionally, Gamma-minimaxity has been defined for parametric models. In this work, we define Gamma-minimax estimators for general models and propose adversarial meta-learning algorithms to compute them when the set of prior distributions is constrained by generalized moments. Accompanying convergence guarantees are also provided. We additionally introduce a neural-network class that provides a rich, but finite-dimensional, class of estimators from which a Gamma-minimax estimator can be selected. We illustrate our method in two settings, namely entropy estimation and a prediction problem that arises in biodiversity studies.
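The adversarial idea in the abstract can be illustrated with a minimal sketch. This is not the paper's actual algorithm — it is a toy gradient-descent-ascent game under assumptions chosen for simplicity: the parameter θ takes values in {0, 1}, a prior is summarized by p = P(θ = 1), squared-error loss is used, and the set Γ of compatible priors is given by the generalized-moment constraint E_p[θ] = p ≤ 0.4. The estimator d descends the Bayes risk while the adversarial prior p ascends it, with p projected back onto Γ by clipping.

```python
# Toy Gamma-minimax sketch (illustrative, not the paper's method).
# Bayes risk under squared error: R(d, p) = p*(d - 1)^2 + (1 - p)*d^2.
# Gamma = {p : 0 <= p <= 0.4}, a moment constraint on the prior mean.
d, p, lr = 0.0, 0.2, 0.1

for _ in range(500):
    grad_d = 2 * (p * (d - 1) + (1 - p) * d)  # dR/dd: estimator's gradient
    grad_p = (d - 1) ** 2 - d ** 2            # dR/dp: adversary's gradient
    d -= lr * grad_d                          # estimator minimizes the risk
    p += lr * grad_p                          # adversarial prior maximizes it
    p = min(max(p, 0.0), 0.4)                 # project the prior onto Gamma

# The worst-case prior saturates the moment bound (p = 0.4), and the estimate
# converges to the Bayes estimate against that prior, d = 0.4.
print(round(d, 3), round(p, 3))
```

In this convex-concave game the moment bound is active at the solution, so the Gamma-minimax estimate coincides with the Bayes estimate under the worst-case prior in Γ; the paper's algorithms tackle the same min-max structure with far richer estimator classes (neural networks) and prior sets.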


Similar resources

Beating the Minimax Rate of Active Learning with Prior Knowledge

Active learning refers to the learning protocol where the learner is allowed to choose a subset of instances for labeling. Previous studies have shown that, compared with passive learning, active learning is able to reduce the label complexity exponentially if the data are linearly separable or satisfy the Tsybakov noise condition with parameter κ = 1. In this paper, we propose a novel active l...


Creating illusions of knowledge: learning errors that contradict prior knowledge.

Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors ...


KBGAN: Adversarial Learning for Knowledge Graph Embeddings

We introduce an adversarial learning framework, which we named KBGAN, to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graph datasets typically only contain positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventi...


Intelligence, Prior Knowledge, and Learning

Intelligence test scores can account for achievement differences in many content areas to a considerable extent. An individual’s IQ results from complex interactions between genes and environmental stimulation, foremost schooling. The amount of variance in intelligence to be explained by genes is the higher the more successful a society is in providing cognitively stimulating environments for e...



Journal

Journal title: Electronic Journal of Statistics

Year: 2023

ISSN: 1935-7524

DOI: https://doi.org/10.1214/23-ejs2151